59 research outputs found

    XGBoost Hyper-Parameter Tuning Using Particle Swarm Optimization for Stock Price Forecasting

    Investment in the capital market has become a lifestyle for millennials in Indonesia, as seen from the increase in the number of SIDs (Single Investor Identifications) from 2.4 million in 2019 to 10.3 million in December 2022. The increase is driven by various factors, from the Covid-19 pandemic, which limited the space for social interaction, to the ease of investing in the capital market through various e-commerce platforms. These investors generally use fundamental and technical analysis to maximize profits and minimize the risk of loss in stock investment. However, these methods are prone to subjectivity and differing interpretations, and they are time-consuming because they require deep research into financial statements, economic conditions, and company reports. Machine learning on historical stock price data, which is time-series data, is one method that can be used for stock price forecasting. This paper proposes XGBoost optimized by Particle Swarm Optimization (PSO) for stock price forecasting. XGBoost is known for its ability to make predictions accurately and efficiently, and PSO is used to optimize the hyper-parameter values of XGBoost. The PSO-optimized XGBoost achieved the best performance when compared with standard XGBoost, Long Short-Term Memory (LSTM), Support Vector Regression (SVR), and Random Forest: the proposed method yields the lowest RMSE, MAE, and MAPE, at 0.0011, 0.0008, and 0.0772% respectively, while its R² reaches the highest value. These results show that the PSO-optimized XGBoost predicts the stock price with a low error rate and is a promising model for stock price forecasting
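
    The tuning loop described above, a particle swarm searching XGBoost's hyper-parameter space for the lowest validation error, can be sketched as below. The objective here is a toy surrogate standing in for the validation RMSE of a trained XGBoost model, and the parameter names (learning_rate, max_depth) and all bounds are illustrative assumptions, not the paper's actual search space.

```python
import random

def pso_minimize(objective, bounds, n_particles=20, n_iters=60,
                 w=0.7, c1=1.5, c2=1.5, seed=42):
    """Minimal particle swarm optimizer over a box-constrained search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: in the paper this would be the validation RMSE of an
# XGBoost model trained with (learning_rate, max_depth); in practice
# max_depth would also be rounded to an integer before training.
def surrogate_rmse(params):
    lr, depth = params
    return (lr - 0.1) ** 2 + (depth - 6.0) ** 2

best, best_val = pso_minimize(surrogate_rmse, bounds=[(0.01, 0.5), (2.0, 12.0)])
```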

    Automated-tuned hyper-parameter deep neural network by using arithmetic optimization algorithm for Lorenz chaotic system

    Deep neural networks (DNNs) are highly dependent on their parameterization and require experts to determine which method to implement and how to modify the hyper-parameter values. This study proposes automated hyper-parameter tuning for a DNN using a metaheuristic optimization algorithm, the arithmetic optimization algorithm (AOA). AOA exploits the distribution properties of the primary arithmetic operators of mathematics: multiplication, division, addition, and subtraction. It is mathematically modeled and implemented to optimize processes across a broad range of search spaces; its performance has been evaluated against 29 benchmark functions and several real-world engineering design problems to demonstrate its applicability. The hyper-parameter tuning framework consists of a set of Lorenz chaotic system datasets, a hybrid DNN architecture, and AOA, working automatically. As a result, AOA produced the highest accuracy on the test dataset with a combination of optimized hyper-parameters for the DNN architecture. Boxplot analysis showed that AOA with ten particles chose hyper-parameters most consistently: it had the smallest boxplot spread for all hyper-parameters, which indicates the best solution. In particular, the proposed system outperformed the same architecture tuned with particle swarm optimization
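
    The AOA mentioned above uses division and multiplication for exploration and subtraction and addition for exploitation, steered by the Math Optimizer Accelerated (MOA) and Math Optimizer Probability (MOP) terms. A minimal sketch follows, minimizing a sphere function as a stand-in for the DNN validation error; the control parameters (alpha, mu) and the bounds are illustrative defaults, not the study's settings.

```python
import random

def aoa_minimize(objective, bounds, n_agents=20, n_iters=100,
                 alpha=5.0, mu=0.499, eps=1e-12, seed=1):
    """Minimal Arithmetic Optimization Algorithm (AOA) sketch: exploration via
    division/multiplication, exploitation via subtraction/addition."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    vals = [objective(x) for x in pop]
    b = min(range(n_agents), key=lambda i: vals[i])
    best, best_val = pop[b][:], vals[b]
    for t in range(1, n_iters + 1):
        moa = 0.2 + t * (1.0 - 0.2) / n_iters                 # accelerated function
        mop = 1.0 - t ** (1 / alpha) / n_iters ** (1 / alpha) # probability term
        for i in range(n_agents):
            new = []
            for d, (lo, hi) in enumerate(bounds):
                scale = (hi - lo) * mu + lo
                if rng.random() > moa:                  # exploration phase
                    if rng.random() < 0.5:
                        x = best[d] / (mop + eps) * scale   # division operator
                    else:
                        x = best[d] * mop * scale           # multiplication operator
                else:                                   # exploitation phase
                    if rng.random() < 0.5:
                        x = best[d] - mop * scale           # subtraction operator
                    else:
                        x = best[d] + mop * scale           # addition operator
                new.append(min(max(x, lo), hi))
            val = objective(new)
            pop[i], vals[i] = new, val
            if val < best_val:
                best, best_val = new[:], val
    return best, best_val

# Sphere function stands in for the DNN validation error over hyper-parameters.
best, best_val = aoa_minimize(lambda x: sum(v * v for v in x),
                              bounds=[(-10.0, 10.0)] * 2)
```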

    Best lecturer decision support system using Analytical Hierarchy Process (AHP) and Simple Additive Weighting (SAW)

    Data processing, as one area of information technology, has a wide range of applications, including the selection of the best lecturer in universities. Best lecturer assessment is required by the university to identify qualified human resources; in the future this process will motivate lecturers to be more productive and to improve the quality of their work. At Buana Perjuangan University, Karawang, the selection of the best lecturer is still conducted manually, which makes the results subjective and questionable, and no decision support system method has yet been applied in determining the best lecturer. In this study, we propose a hybrid method that combines the Analytical Hierarchy Process (AHP) and Simple Additive Weighting (SAW) to determine the best lecturer using five criteria, namely attendance at lecture meetings, academic rank, number of publications, additional assignments, and education level. The data samples were obtained using a random sampling technique on lecturer data provided by the University Data and Information Center. The study involves two phases. Phase 1 develops a questionnaire that contains a list of criteria that the best lecturer must have. In Phase 2, an AHP decision matrix is developed, and the candidates are ranked based on the criteria selected in Phase 1 and their weight values. The result is then ranked using SAW to obtain an even more accurate best-lecturer decision. The results indicate that the Consistency Index (CI) is consistent for all respondents, 0.8 on average. The CI is then used to determine the Consistency Ratio (CR), which is also consistent for all respondents, -0.7 on average. These results show that the proposed method produces the same output as the expert (respondent) decisions
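
    A minimal sketch of the two phases follows: AHP consistency checking via CI = (lambda_max - n)/(n - 1) and CR = CI/RI, then SAW ranking by weighted, column-normalized scores. The pairwise matrix, criteria values, and Random Index (RI) entries shown are illustrative, not the study's data.

```python
# Random Index values from Saaty's table (n = 1..5)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(matrix):
    """Priority weights from a pairwise comparison matrix (column-average method)."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    norm = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in norm]

def consistency_ratio(matrix):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = len(matrix)
    w = ahp_weights(matrix)
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam_max = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam_max - n) / (n - 1)
    return ci / RI[n] if RI[n] else 0.0

def saw_scores(alternatives, weights):
    """Simple Additive Weighting: normalize benefit criteria by column max,
    then take the weighted sum per alternative."""
    n_crit = len(weights)
    maxima = [max(a[j] for a in alternatives) for j in range(n_crit)]
    return [sum(weights[j] * a[j] / maxima[j] for j in range(n_crit))
            for a in alternatives]

# Perfectly consistent 3x3 pairwise matrix (so CR should come out as 0)
A = [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]
cr = consistency_ratio(A)
# Two hypothetical lecturers scored on three benefit criteria
ranking = saw_scores([[90, 3, 10], [80, 5, 7]], weights=ahp_weights(A))
```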

    Parameter Prediction for Lorenz Attractor by using Deep Neural Network

    Nowadays, most modern deep learning models are based on artificial neural networks. This research presents a deep neural network that learns a high-precision dataset of the strange Lorenz attractor. The Lorenz system is one of the simplest chaotic systems; it is nonlinear and characterized by unstable dynamic behavior. The research aims to predict whether a given parameter set produces a strange Lorenz attractor (yes or no). The primary method implemented in this paper is a Deep Neural Network built with the Python Keras library. Different numbers of hidden layers are used to compare the prediction accuracy of the system. A set of data is used as the input of the neural network, and the prediction accuracy is measured on the output. As a result, testing shows that 100% correct prediction can be achieved on the training data, while only 60% correct prediction is achieved on new random data
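
    The dataset behind such a predictor is typically generated by integrating the Lorenz equations dx/dt = sigma(y - x), dy/dt = x(rho - z) - y, dz/dt = xy - beta*z. Below is a minimal generation sketch, assuming the classic parameters sigma = 10, rho = 28, beta = 8/3 and a simple Euler integrator; the paper's actual dataset construction may differ.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz system:
    dx = sigma*(y - x), dy = x*(rho - z) - y, dz = x*y - beta*z."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def lorenz_trajectory(n_steps=5000, start=(1.0, 1.0, 1.0), **kw):
    """Integrate forward, returning the list of visited (x, y, z) states."""
    traj = [start]
    for _ in range(n_steps):
        traj.append(lorenz_step(traj[-1], **kw))
    return traj

# With the classic parameters the trajectory stays on the bounded,
# butterfly-shaped strange attractor.
traj = lorenz_trajectory()
```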

    Waypoint navigation of quad-rotor MAV

    A quad-rotor Micro Aerial Vehicle (MAV) is a multi-rotor MAV with four propellers that lift the MAV into the air and move it around. It has high maneuverability, with roll, pitch, and yaw movements. However, line of sight and radio control effective range are major limitations for MAVs, which significantly shorten the travel distance. Therefore, we propose waypoint navigation for a quad-rotor MAV based on a PID controller in this paper. The user can set a mission with multiple waypoints, and the PID controller steers the MAV autonomously along the waypoints to the desired position without a radio control link or the guidance of a pilot. The results show that the PID controller is capable of moving the MAV to the desired position with high accuracy. The real flight experiment shows that the percentage overshoot (%OS) of the designed PID controller is 13% for x, 11.89% for y, and 2.34% for z, while the steady-state error for all axes is 0%. This shows that the performance of the PID controller is satisfactory; hence the quad-rotor MAV can move to the desired location via waypoint navigation without the guidance of a pilot
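
    The position control described above can be sketched with a textbook discrete PID loop. The plant below is a deliberately simplified single-axis model (position responding directly to the control command), and the gains and waypoint are illustrative, not the flight controller's tuned values.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy single-axis plant standing in for one MAV position axis.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.02)
pos = 0.0
waypoint = 5.0          # desired x position in metres (illustrative)
for _ in range(2000):   # 40 s of simulated flight at 50 Hz
    u = pid.update(waypoint, pos)
    pos += u * pid.dt   # simplified kinematics: u acts as a velocity command
```

In a real mission the loop would run per axis and advance to the next waypoint once the position error falls below a threshold.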

    DEVELOPING TROPICAL LANDSLIDE SUSCEPTIBILITY MAP USING DINSAR TECHNIQUE OF JERS-1 SAR DATA

    Comprehensive information on a natural disaster area is essential to protect people from further damage that might occur before and after such an event. Mapping the area is one way to comprehend the situation when disaster strikes. Remote sensing data have been widely used along with GIS to create susceptibility maps. The objective of this study was to develop an improved landslide susceptibility map by integrating optical satellite images from Landsat ETM and ASTER with Japanese Earth Resources Satellite (JERS-1) Synthetic Aperture Radar (SAR) data, complemented by ground GPS and feature measurements, in a Geographical Information Systems (GIS) platform. The study area was focused on a landslide event that occurred on 26 March 2004 in the Jeneberang Watershed of South Sulawesi, Indonesia. Change detection analysis was used to extract thematic information, and the technique of Differential SAR Interferometry (DInSAR) was employed to detect slight surface displacement before the landslide event. The DInSAR-processed images were used as one weighted analysis factor in creating the landslide susceptibility map. The result indicated a slight movement of the slope prior to the landslide during the JERS-1 SAR data acquisition period of 1993-1998.
    Keywords: Optical Images, JERS-1 SAR, DInSAR, Tropical Landslide, GIS, Susceptibility Map
    1. Introduction
    Recently, natural disasters have increased in frequency, complexity, scope, and destructive capacity. They have been particularly severe during the last few years, when the world has experienced several large-scale natural disasters such as the Indian Ocean earthquake and tsunami; floods and forest fires in Europe, India, and China; and drought in Africa (Sassa, 2005). Mapping such natural disaster areas is essential to protect people from further damage that might occur before and after such events.
In Indonesia in particular, natural disasters have occurred more frequently in recent years compared to the last decade (BNPB, 2008). Within a single month in 2011, on three different islands, Indonesia was struck by earthquake, tsunami, flash floods, and volcanic eruptions, with severe fatalities to people and the environment. It is obvious that Indonesia is prone to natural disaster due to its position, squeezed geologically between three major world plates, and this fact makes Indonesia one of the most dangerous
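
    The core DInSAR relation used to detect such slight displacement converts differential interferometric phase to line-of-sight motion, d = -(lambda / (4*pi)) * delta_phi, so one fringe (2*pi of phase) corresponds to half a wavelength of displacement. A small sketch with the JERS-1 L-band wavelength (about 23.5 cm):

```python
import math

JERS1_WAVELENGTH_M = 0.235  # JERS-1 L-band SAR wavelength, ~23.5 cm

def los_displacement(delta_phase_rad, wavelength_m=JERS1_WAVELENGTH_M):
    """Line-of-sight surface displacement from differential interferometric
    phase: d = -(lambda / (4*pi)) * delta_phi."""
    return -(wavelength_m / (4.0 * math.pi)) * delta_phase_rad

# One full fringe (2*pi) of JERS-1 differential phase corresponds to
# lambda/2, i.e. about 11.75 cm of line-of-sight displacement.
d = los_displacement(2.0 * math.pi)
```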

    PID controller design for mobile robot using Bat Algorithm with Mutation (BAM)

    By definition, a mobile robot is a type of robot that can move in a certain kind of environment and is generally used to accomplish certain tasks with some degrees of freedom (DoF). Applications of mobile robots cover both industrial and domestic areas and may help to reduce risk to humans and to the environment. A mobile robot is expected to operate safely and must stay away from hazards such as obstacles; therefore, a controller needs to be designed to make the system robust and adaptive. In this study, a PID controller is chosen to control a mobile robot. PID is considered a simple yet powerful controller for many kinds of applications. In designing a PID controller, the user needs to set appropriate controller gains to achieve the desired performance of the control system, in terms of time response and steady-state error. Here, an optimization algorithm called the Bat Algorithm with Mutation (BAM) is proposed to optimize the PID controller gains for a mobile robot. The algorithm is compared with a well-known optimization algorithm, Particle Swarm Optimization (PSO). The results show that BAM outperforms PSO in terms of overshoot percentage and steady-state error: BAM gives 2.29% overshoot and 2.94% steady-state error, while PSO gives 3.07% overshoot and 3.72% steady-state error
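
    A minimal sketch of a bat algorithm with an added mutation step is shown below, minimizing a toy quadratic that stands in for a control-cost measure (e.g. penalizing overshoot and steady-state error) over candidate (Kp, Ki, Kd) gains. The loudness, pulse-rate, and mutation parameters are illustrative assumptions, not the paper's BAM settings.

```python
import random

def bam_minimize(objective, bounds, n_bats=20, n_iters=100,
                 fmin=0.0, fmax=2.0, loudness=0.9, pulse_rate=0.5,
                 mutation_rate=0.1, seed=7):
    """Minimal Bat Algorithm with Mutation (BAM) sketch."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x, d: min(max(x, bounds[d][0]), bounds[d][1])
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    vals = [objective(p) for p in pos]
    b = min(range(n_bats), key=lambda i: vals[i])
    best, best_val = pos[b][:], vals[b]
    for _ in range(n_iters):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()
            cand = []
            for d in range(dim):
                vel[i][d] += (pos[i][d] - best[d]) * freq
                cand.append(clip(pos[i][d] + vel[i][d], d))
            if rng.random() > pulse_rate:      # local random walk around best bat
                cand = [clip(best[d] + 0.01 * (bounds[d][1] - bounds[d][0])
                             * rng.gauss(0, 1), d) for d in range(dim)]
            if rng.random() < mutation_rate:   # mutation: reset one random gene
                d = rng.randrange(dim)
                cand[d] = rng.uniform(bounds[d][0], bounds[d][1])
            val = objective(cand)
            if val < vals[i] and rng.random() < loudness:
                pos[i], vals[i] = cand, val
                if val < best_val:
                    best, best_val = cand[:], val
    return best, best_val

# Stand-in objective: in the paper this would be a simulated control cost of
# the mobile robot under candidate gains (Kp, Ki, Kd).
gains, cost = bam_minimize(lambda g: sum((v - 1.0) ** 2 for v in g),
                           bounds=[(0.0, 10.0)] * 3)
```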

    Technical job distribution at BSD SHARP service center using a combination of naïve Bayes and k-Nearest Neighbour

    Work distribution is a routine carried out every day by the head of the branch at the SHARP Service Center. Accurate division of labor is very important for customer satisfaction, and inappropriate work distribution can increase complaints from customers. Currently, work distribution at the SHARP Service Center is carried out manually: the jobs received in the selected system are shared through the document provided. This process takes about 1.42 minutes on average for each damage report, and the speed of service also depends on the department head's expertise and experience. In this study, an automatic system based on machine learning is designed for distributing work to technicians using a combination of k-Nearest Neighbour (k-NN) and Naïve Bayes. The Naïve Bayes algorithm is used to improve the feature extraction accuracy by considering features below the average (α), while the k-NN algorithm is used to classify the experimental data. The study found that the best k value for the k-NN algorithm is 15; the higher the accuracy value, the more accurate the work distribution. The proposed method is validated using a confusion matrix with a composition of 80% training data and 20% test data. The single-classifier test with the Naïve Bayes algorithm produces an accuracy of 72.7%, while the k-NN algorithm reaches 81.5%. With the combination of the Naïve Bayes and k-NN algorithms, the accuracy increases to 86%, an improvement of 13.3% over the single Naïve Bayes algorithm and 4% over the single k-NN algorithm. In the manual process, the average time per job is 1.42 minutes, while the proposed method takes around 0.03 seconds per job; a speed-up of about 2480 times was found and confirmed during the implementation of the proposed method
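
    The classification side of the pipeline can be sketched as a plain k-NN majority vote, optionally weighting features, e.g. by a Naïve Bayes relevance score as in the combined approach. The toy feature vectors and technician labels below are invented for illustration, and the small k is chosen to match the toy dataset size rather than the paper's k = 15.

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=15, weights=None):
    """k-NN majority vote with an optional per-feature weight vector
    (hypothetically supplied by a Naive Bayes relevance score)."""
    w = weights or [1.0] * len(query)
    dist = lambda a: math.sqrt(sum(wi * (ai - qi) ** 2
                                   for wi, ai, qi in zip(w, a, query)))
    ranked = sorted(zip(train_X, train_y), key=lambda t: dist(t[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy damage-report feature vectors in two clusters (hypothetical features),
# each labelled with the technician group that handled them.
X = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),
     (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]
y = ["tech_A", "tech_A", "tech_A", "tech_B", "tech_B", "tech_B"]
label = knn_predict(X, y, query=(0.12, 0.18), k=3)
```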

    Comparison of accuracy performance based on normalization techniques for the features fusion of face and online signature

    Feature-level fusion in a multimodal biometric system can produce higher accuracy than score-level and decision-level fusion because of the richer information it provides: features from multiple modalities are fused prior to the classification phase. In this paper, features from the face (image based) and online signature (dynamics based) are extracted using Linear Discriminant Analysis (LDA). The aim of this research is to recognize an authorized person based on both sets of features. Because the features come from different domains, one modality may have dominant values that overpower the other in the classification phase, and the aim cannot be achieved if the classification relies more on one modality than on both. To overcome this issue, feature normalization is applied to the extracted features prior to the fusion process; normalization standardizes the range of feature values. Several normalization techniques are examined in this paper, namely min-max, z-score, double sigmoid function, tanh estimator, median absolute deviation (MAD), and decimal scaling, and the technique most applicable to this case is determined by the best accuracy performance of the system. After the classification phase, the highest accuracy, 98.32%, is obtained with decimal scaling normalization, showing that this technique outperforms the others
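
    Several of the normalization techniques compared above can be sketched directly. The tanh estimator below uses the common 0.5*(tanh(0.01*(x - mu)/sigma) + 1) form, and the sample feature values are illustrative, not the paper's face or signature features.

```python
import math
import statistics

def min_max(xs):
    """Rescale to [0, 1] using the observed minimum and maximum."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def z_score(xs):
    """Center on the mean and scale by the (population) standard deviation."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

def tanh_estimator(xs):
    """Tanh estimator: maps into (0, 1), robust to extreme values."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [0.5 * (math.tanh(0.01 * (x - mu) / sd) + 1.0) for x in xs]

def decimal_scaling(xs):
    """Divide by 10^j, where j is the digit count of the largest magnitude,
    so every value falls inside (-1, 1)."""
    j = len(str(int(max(abs(x) for x in xs))))
    return [x / 10 ** j for x in xs]

feats = [120.0, 340.0, 85.0, 910.0]   # illustrative raw feature values
scaled = decimal_scaling(feats)
```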